release: 0.11.0 #343
Conversation
```python
event_type = items[0].event_type
assert all(i.event_type == event_type for i in items), (
    "_process_items requires all items to share the same event_type; "
    "callers must split START and END batches before dispatching."
)
```
**`assert` in production guard defeats data-corruption protection**
The code comment correctly identifies this as a potential "silent data-corruption bug," but using `assert` for the guard means it is silently stripped when Python runs with the `-O` (optimize) flag. If a caller ever passes a mixed-event-type list, START and END spans would be fed to the wrong batched method with no warning. Use an explicit `if/raise` instead.
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/agentex/lib/core/tracing/span_queue.py
Line: 107-111
How can I resolve this? If you propose a fix, please make it concise.

```diff
    sgp_spans: list[SGPSpan] = []
    for span in spans:
        self._add_source_to_span(span)
        sgp_span = create_span(
            name=span.name,
            span_type=_get_span_type(span),
            span_id=span.id,
            parent_id=span.parent_id,
            trace_id=span.trace_id,
            input=span.input,
            output=span.output,
            metadata=span.data,
        )
        sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
        self._spans[span.id] = sgp_span
        sgp_spans.append(sgp_span)

    if self.disabled:
        logger.warning("SGP is disabled, skipping span upsert")
        return
    # TODO(AGX1-198): Batch multiple spans into a single upsert_batch call
    # instead of one span per HTTP request.
    # https://linear.app/scale-epd/issue/AGX1-198/actually-use-sgp-batching-for-spans
    await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
-       items=[sgp_span.to_request_params()]
+       items=[s.to_request_params() for s in sgp_spans]
    )
```
**`_spans` populated before upsert — stale entries on HTTP failure**
Spans are added to `self._spans` before the `upsert_batch` HTTP call (lines 155–156). If the batch upsert throws (network error, server 5xx), the exception is caught upstream by the queue's `_handle`, but `_spans` already holds entries for spans whose start event was never delivered to SGP. A subsequent `on_spans_end` will find those spans, update them, and send end-only upserts — orphaned end events with no matching start on the server.
The old single-span code registered the span in `_spans` only after a successful upsert, so failures were cleanly skipped on the end path. Consider populating `_spans` only after confirming the batch call succeeded, or rolling back entries on exception.
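A minimal sketch of the suggested ordering. The client and span objects here are hypothetical stand-ins (a bare async `upsert_batch` coroutine), not the real SGP client API; only the register-after-success pattern is the point:

```python
import asyncio
from types import SimpleNamespace


class SketchProcessor:
    """Registers spans in _spans only after the batch upsert succeeds."""

    def __init__(self, client):
        self._spans: dict[str, object] = {}
        self._client = client

    async def on_spans_start(self, spans) -> None:
        # Build the batch first, without touching self._spans.
        batch = [(span.id, span) for span in spans]
        # If this raises (network error, 5xx), control leaves the method
        # before anything is registered, so on_spans_end finds no stale
        # entries and sends no orphaned end-only upserts.
        await self._client.upsert_batch(items=[s for _, s in batch])
        # Only reached on success: now it is safe to register.
        for span_id, sgp_span in batch:
            self._spans[span_id] = sgp_span


class FailingClient:
    async def upsert_batch(self, items):
        raise ConnectionError("simulated network failure")


class OkClient:
    async def upsert_batch(self, items):
        pass
```

The alternative mentioned in the comment (register first, roll back in an `except` block) works too, but register-after-success needs no cleanup path.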
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py
Line: 141-163
How can I resolve this? If you propose a fix, please make it concise.
```diff
        sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
        self._spans[span.id] = sgp_span
        sgp_spans.append(sgp_span)

    if self.disabled:
        logger.warning("SGP is disabled, skipping span upsert")
        return
    # TODO(AGX1-198): Batch multiple spans into a single upsert_batch call
    # instead of one span per HTTP request.
    # https://linear.app/scale-epd/issue/AGX1-198/actually-use-sgp-batching-for-spans
    await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
-       items=[sgp_span.to_request_params()]
+       items=[s.to_request_params() for s in sgp_spans]
    )
```
**`shutdown()` crashes with `AttributeError` when `disabled=True` and spans are in-flight**
`on_spans_start` now populates `self._spans` (line 155) **before** the `if self.disabled: return` guard (line 158). If any spans are started but not yet ended when `shutdown()` is called in disabled mode, it reaches `self.sgp_async_client.spans.upsert_batch(...)` where `self.sgp_async_client` is `None`, triggering an `AttributeError`. Before this PR the disabled path returned before populating `_spans`, so `_spans` was always empty at shutdown time and this was never triggered in practice. The fix is to either move the `self._spans[span.id] = sgp_span` assignment after the `if self.disabled` guard, or add an early `if self.disabled: return` check at the top of `shutdown()` (mirroring how `on_spans_end` handles it at line 184).
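A minimal sketch of the second suggested fix, an early `disabled` return at the top of `shutdown()`. Class shape and attribute names mirror the comment; the client is deliberately left as `None` to simulate disabled mode, and the flush loop body is a hypothetical stand-in:

```python
import asyncio


class SketchProcessor:
    """Disabled-mode guard for shutdown(), mirroring on_spans_end."""

    def __init__(self, disabled: bool):
        self.disabled = disabled
        # Stand-in: in disabled mode the real processor never builds a
        # client, so this attribute is None.
        self.sgp_async_client = None
        self._spans = {"span-1": object()}  # an in-flight (unended) span

    async def shutdown(self) -> None:
        if self.disabled:
            # Early return: never dereference sgp_async_client when
            # disabled, even if _spans is non-empty at shutdown time.
            return
        for sgp_span in self._spans.values():
            # Without the guard above, this line is what raises
            # AttributeError ('NoneType' has no attribute 'spans').
            await self.sgp_async_client.spans.upsert_batch(items=[sgp_span])
```

The other option from the comment (moving the `_spans` assignment below the `disabled` guard in `on_spans_start`) fixes the same crash at the source instead of at the sink.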
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py
Line: 154-163
How can I resolve this? If you propose a fix, please make it concise.
Automated Release PR
0.11.0 (2026-05-05)
Full Changelog: v0.10.4...v0.11.0
Features
`usage`, `response_id`, plumb `previous_response_id`, opt-in `prompt_cache_key` for stateful responses and prompt caching (#335) (ba5d64b)
Chores
This pull request is managed by Stainless's GitHub App.
The semver version number is based on included commit messages. Alternatively, you can manually set the version number in the title of this pull request.
For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.
🔗 Stainless website
📚 Read the docs
🙋 Reach out for help or questions
Greptile Summary
Adds batched span lifecycle methods (`on_spans_start`/`on_spans_end`) to the tracing processor interface and `SGPAsyncTracingProcessor`, collapsing per-span HTTP calls into a single `upsert_batch` per drain cycle. `SGPAsyncTracingProcessor.on_span_start` now delegates to `on_spans_start`, with the default interface fallback fanning out to single-span methods in parallel.
Confidence Score: 3/5
Three P1 issues flagged in prior review threads remain unaddressed; the batching logic is correct but surrounding error-handling and disabled-path bugs carry real runtime risk.
Multiple P1 findings from prior threads are still present: shutdown() crashes with AttributeError when disabled=True and _spans is non-empty; _spans is mutated before upsert_batch confirms success leaving orphaned end-only events on network failure; and the assert guard in _process_items is silently removed under python -O. Any one of these caps confidence at 4/5; all three together pull it to 3/5.
Affected files: src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py (shutdown disabled-path crash, stale _spans on HTTP failure) and src/agentex/lib/core/tracing/span_queue.py (assert stripped by -O).
Important Files Changed
Sequence Diagram
```mermaid
sequenceDiagram
    participant C as Caller
    participant Q as AsyncSpanQueue
    participant PI as AsyncTracingProcessor (base)
    participant SGP as SGPAsyncTracingProcessor
    Note over Q: Drain loop collects batch
    C->>Q: enqueue(START, span, [sgp_proc])
    Q->>Q: _drain_loop accumulates batch
    Q->>Q: _process_items(starts) — groups by processor
    Q->>SGP: on_spans_start([span1, span2, …])
    SGP->>SGP: _spans[id] = sgp_span (for each span)
    SGP-->>Q: upsert_batch(items=[…]) ← single HTTP call
    Note over Q: END batch processed after STARTs
    Q->>SGP: on_spans_end([span1, span2, …])
    SGP->>SGP: _spans.pop(id) (for each span)
    SGP-->>Q: upsert_batch(items=[…]) ← single HTTP call
    Note over PI: Default fallback (non-overriding processors)
    Q->>PI: on_spans_start([span1, span2])
    PI->>PI: asyncio.gather(on_span_start(s) for s in spans)
    PI-->>Q: per-span results logged on error
```

Reviews (11): Last reviewed commit: "release: 0.11.0"
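The default interface fallback in the diagram (a batched call fanning out to the single-span hooks via `asyncio.gather`) can be sketched as follows; class and method names follow the diagram, but the body is illustrative rather than the library's actual implementation:

```python
import asyncio


class AsyncTracingProcessor:
    """Base-interface sketch: batched hook falls back to per-span hooks."""

    async def on_span_start(self, span) -> None:
        raise NotImplementedError

    async def on_spans_start(self, spans) -> None:
        # Default fallback: run the single-span hook for every span
        # concurrently. return_exceptions=True collects failures instead
        # of raising, so one bad span cannot sink the whole batch.
        results = await asyncio.gather(
            *(self.on_span_start(s) for s in spans),
            return_exceptions=True,
        )
        for span, result in zip(spans, results):
            if isinstance(result, Exception):
                # A real processor would log here, per the diagram's
                # "per-span results logged on error" step.
                print(f"span {span!r} failed: {result}")


class Recorder(AsyncTracingProcessor):
    """Example subclass that only overrides the single-span hook."""

    def __init__(self):
        self.seen = []

    async def on_span_start(self, span) -> None:
        self.seen.append(span)
```

Processors like `SGPAsyncTracingProcessor` override `on_spans_start` itself to get true batching; subclasses that only implement `on_span_start` still work through this fallback.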